Search Results
Search for: All records
Total Resources: 4
Representation learning is a powerful tool that enables learning over large multitudes of agents or domains by enforcing that all agents operate on a shared set of learned features. However, many robotics or controls applications that would benefit from collaboration operate in settings with changing environments and goals, whereas most guarantees for representation learning are stated for static settings. Toward rigorously establishing the benefit of representation learning in dynamic settings, we analyze the regret of multi-task representation learning for linear-quadratic control. This setting introduces unique challenges. First, we must account for and balance the misspecification introduced by an approximate representation. Second, we cannot rely on the parameter update schemes of single-task online LQR, for which least squares often suffices, and must devise a novel scheme to ensure sufficient improvement. We demonstrate that in settings where exploration is benign, the regret of any agent after T timesteps scales as √(T/H), where H is the number of agents. In settings with difficult exploration, the regret scales as √(d_u · d_θ · T) + T^(3/4)/H^(1/5), where d_u is the input dimension and d_θ is the task-specific parameter dimension. In both cases, comparison with the minimax single-task regret shows a benefit from a large number of agents. Notably, in the difficult-exploration case, sharing a representation across tasks often keeps the effective task-specific parameter count small. Lastly, we validate the trends we predict.
Free, publicly-accessible full text available April 11, 2026.
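The shared-representation idea underlying this abstract can be illustrated with a minimal static-regression sketch. This is not the paper's online LQR algorithm; the dimensions, the alternating least-squares scheme, and all variable names below are illustrative assumptions. Each of H tasks has a parameter θ_h = Φ w_h lying in a shared k-dimensional subspace, and pooling data across tasks lets the shared Φ be estimated far more accurately than any single task could manage alone:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: ambient dim, shared-feature dim, tasks, samples per task
d, k, H, N = 10, 2, 8, 200

# Ground truth: every task parameter theta_h = Phi_true @ w_h lies in a
# shared k-dimensional subspace spanned by the columns of Phi_true.
Phi_true, _ = np.linalg.qr(rng.standard_normal((d, k)))
W_true = rng.standard_normal((k, H))

# Per-task linear-regression data: y = x^T theta_h + noise
X = rng.standard_normal((H, N, d))
Y = np.einsum('hnd,dk,kh->hn', X, Phi_true, W_true) + 0.1 * rng.standard_normal((H, N))

# Alternating minimization: with Phi fixed, each task weight w_h is a small
# least-squares problem; with W fixed, Phi is updated by least squares over
# the data pooled from all H tasks.
Phi = np.linalg.qr(rng.standard_normal((d, k)))[0]
for _ in range(50):
    W = np.stack(
        [np.linalg.lstsq(X[h] @ Phi, Y[h], rcond=None)[0] for h in range(H)],
        axis=1,
    )
    # Pooled Phi update: y[h,n] is linear in vec(Phi) with feature x_n w_h^T
    A = np.einsum('hnd,kh->hndk', X, W).reshape(H * N, d * k)
    Phi = np.linalg.lstsq(A, Y.reshape(-1), rcond=None)[0].reshape(d, k)
    Phi, _ = np.linalg.qr(Phi)  # re-orthonormalize the representation

# Compare subspaces via their orthogonal projectors (invariant to rotation)
err = np.linalg.norm(Phi_true @ Phi_true.T - Phi @ Phi.T, 2)
print(f"subspace error: {err:.3f}")
```

The projector comparison at the end sidesteps the rotation ambiguity in Φ: only the column span is identifiable, which is also why the abstract's benefit accrues to the shared subspace rather than to any particular parameterization.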
-
Lee, Bruce D.; Toso, Leonardo F.; Zhang, Thomas T.; Anderson, James; Matni, Nikolai (Proceedings of the AAAI Conference on Artificial Intelligence). Free, publicly-accessible full text available February 1, 2026.
-
Zhang, Thomas T.; Lee, Bruce D.; Hassani, Hamed; Matni, Nikolai (IEEE)
-
Zhang, Thomas T.; Kang, Katie; Lee, Bruce D.; Tomlin, Claire; Levine, Sergey; Tu, Stephen; Matni, Nikolai (L4DC, PMLR)